80 research outputs found
Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions
We develop a robust uncertainty principle for finite signals in C^N which
states that for almost all subsets T,W of {0,...,N-1} such that |T|+|W| ~ (log
N)^(-1/2) N, there is no signal f supported on T whose discrete Fourier
transform is supported on W. In fact, we can make the above uncertainty
principle quantitative in the sense that if f is supported on T, then only a
small percentage of the energy (less than half, say) of its Fourier transform
is concentrated on W.
As a consequence of this quantitative robust uncertainty principle (QRUP), we consider
the problem of decomposing a signal into a sparse superposition of spikes and
complex sinusoids. We show that if a generic signal f has a decomposition using
spike and frequency locations in T and W respectively, and obeying |T| + |W| <=
C (\log N)^{-1/2} N, then this is the unique sparsest possible decomposition
(all other decompositions have more non-zero terms). In addition, if |T| + |W|
<= C (\log N)^{-1} N, then this sparsest decomposition can be found by solving
a convex optimization problem.
Comment: 25 pages, 9 figures
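A minimal numerical sketch of the decomposition step may help: the program below recovers a sparse spike/sinusoid representation by l1 minimization (basis pursuit) over the combined dictionary. It assumes numpy and a cvxpy version with complex-variable support; the signal size and support sizes are illustrative choices, not values from the paper.

```python
import numpy as np
import cvxpy as cp

N = 64
rng = np.random.default_rng(0)

# Columns of F are the N complex sinusoids (unitary inverse DFT).
F = np.fft.ifft(np.eye(N), norm="ortho")
D = np.hstack([np.eye(N).astype(complex), F])   # dictionary [spikes | sinusoids]

# Build f from 3 spikes (locations T) and 3 frequencies (locations W).
alpha_true = np.zeros(2 * N, dtype=complex)
alpha_true[rng.choice(N, 3, replace=False)] = rng.standard_normal(3)
alpha_true[N + rng.choice(N, 3, replace=False)] = rng.standard_normal(3)
f = D @ alpha_true

# Basis pursuit: minimize ||alpha||_1 subject to D alpha = f.
alpha = cp.Variable(2 * N, complex=True)
prob = cp.Problem(cp.Minimize(cp.norm1(alpha)), [D @ alpha == f])
prob.solve()

print("nonzeros in recovered decomposition:",
      int(np.sum(np.abs(alpha.value) > 1e-6)))
```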
The Dantzig selector: Statistical estimation when p is much larger than n
In many important statistical applications, the number of variables or
parameters p is much larger than the number of observations n. Suppose then
that we have observations y = X\beta + z, where \beta \in \mathbf{R}^p is a
parameter vector of interest, X is a data matrix with possibly far fewer rows
than columns, n \ll p, and the z_i's are i.i.d. N(0, \sigma^2). Is it
possible to estimate \beta reliably based on the noisy data y? To estimate
\beta, we introduce a new estimator--we call it the Dantzig selector--which
is a solution to the \ell_1-regularization problem
\min_{\tilde{\beta}\in\mathbf{R}^p} \|\tilde{\beta}\|_{\ell_1} \quad
\text{subject to} \quad
\|X^* r\|_{\ell_\infty} \leq (1 + t^{-1}) \sqrt{2 \log p} \cdot \sigma,
where r = y - X\tilde{\beta} is the residual vector and t is a positive
scalar. We show that if X obeys a uniform uncertainty principle (with
unit-normed columns) and if the true parameter vector \beta is sufficiently
sparse (which here roughly guarantees that the model is identifiable), then
with very large probability,
\|\hat{\beta} - \beta\|_{\ell_2}^2 \leq
C^2 \cdot 2 \log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).
Our results are nonasymptotic and we give values for the constant C. Even
though n may be much smaller than p, our estimator achieves a loss within a
logarithmic factor of the ideal mean squared error one would achieve with an
oracle which would supply perfect information about which coordinates are
nonzero, and which were above the noise level. In multivariate regression and
from a model selection viewpoint, our result says that it is possible nearly
to select the best subset of variables by solving a very simple convex
program, which, in fact, can easily be recast as a convenient linear program
(LP).
Comment: This paper is discussed in [arXiv:0803.3124], [arXiv:0803.3126],
[arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135];
rejoinder in [arXiv:0803.3136]. Published at
http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics
(http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
(http://www.imstat.org).
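For concreteness, here is a small sketch of the Dantzig selector program stated above, written with numpy and cvxpy; the dimensions, noise level, and the choice t = 3 are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cvxpy as cp

n, p, s = 72, 256, 8            # n observations, p parameters, s nonzeros
sigma, t = 0.5, 3.0
rng = np.random.default_rng(1)

X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)  # unit-normed columns
beta_true = np.zeros(p)
beta_true[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = X @ beta_true + sigma * rng.standard_normal(n)

# min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= (1 + 1/t) sqrt(2 log p) sigma
lam = (1 + 1 / t) * np.sqrt(2 * np.log(p)) * sigma
beta = cp.Variable(p)
constraint = cp.norm(X.T @ (y - X @ beta), "inf") <= lam
prob = cp.Problem(cp.Minimize(cp.norm1(beta)), [constraint])
prob.solve()

print("squared l2 error:", float(np.sum((beta.value - beta_true) ** 2)))
```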
Decoding by Linear Programming
This paper considers the classical error correcting problem which is
frequently discussed in coding theory. We wish to recover an input vector
f \in \mathbf{R}^n from corrupted measurements y = Af + e. Here, A is an m
by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is
it possible to recover f exactly from the data y? We prove that under
suitable conditions on the coding matrix A, the input f is the unique
solution to the \ell_1-minimization problem
(\|x\|_{\ell_1} := \sum_i |x_i|)
\min_{g \in \mathbf{R}^n} \|y - Ag\|_{\ell_1}
provided that the support of the vector of errors is not too large,
\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \leq \rho \cdot m for some \rho > 0.
In short, f can be recovered exactly by solving a simple convex optimization
problem (which one can recast as a linear program). In addition, numerical
experiments suggest that this recovery procedure works unreasonably well; f
is recovered exactly even in situations where a significant fraction of the
output is corrupted.
Comment: 22 pages, 4 figures, submitted
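The l1 decoding step translates directly into a few lines of cvxpy. A sketch under assumed illustrative parameters (codeword length m = 512, message length n = 128, 10% of the entries corrupted):

```python
import numpy as np
import cvxpy as cp

m, n = 512, 128
rng = np.random.default_rng(2)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random coding matrix
f = rng.standard_normal(n)                     # input vector to protect
e = np.zeros(m)
corrupt = rng.choice(m, m // 10, replace=False)
e[corrupt] = rng.standard_normal(m // 10)      # sparse, arbitrary errors
y = A @ f + e                                  # corrupted measurements

# l1 decoding: min_g ||y - A g||_1  (recastable as a linear program)
g = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(y - A @ g)))
prob.solve()

print("max recovery error:", float(np.max(np.abs(g.value - f))))
```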
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Suppose we are given a vector f in \mathbf{R}^N. How many linear measurements
do we need to make about f to be able to recover f to within precision
\epsilon in the Euclidean (\ell_2) metric? Or more exactly, suppose we are
interested in a class F of such objects--discrete digital signals, images,
etc.; how many linear measurements do we need to recover objects from this
class to within accuracy \epsilon? This paper shows that if the objects of
interest are sparse or compressible in the sense that the reordered entries
of a signal f \in F decay like a power-law (or if the coefficient sequence
of f in a fixed basis decays like a power-law), then it is possible to
reconstruct f to within very high accuracy from a small number of random
measurements.
Comment: 39 pages; no figures; to appear. Bernoulli ensemble proof has been
corrected; other expository and bibliographical changes made, incorporating
the referee's suggestions.
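As a toy illustration of recovery from random projections, the sketch below measures a compressible (power-law) signal with a random Gaussian matrix and reconstructs it by l1 minimization; it assumes numpy and cvxpy, and the power-law exponent and measurement count are arbitrary demo choices.

```python
import numpy as np
import cvxpy as cp

N, K = 256, 96                  # signal length, number of measurements
rng = np.random.default_rng(3)

# A compressible signal: reordered entries decay like a power law.
f = np.sign(rng.standard_normal(N)) * np.arange(1, N + 1) ** -1.5
rng.shuffle(f)

Phi = rng.standard_normal((K, N)) / np.sqrt(K)  # random Gaussian projections
y = Phi @ f

# l1 reconstruction: min ||x||_1 subject to Phi x = y.
x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [Phi @ x == y])
prob.solve()

print("relative l2 error:",
      float(np.linalg.norm(x.value - f) / np.linalg.norm(f)))
```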
A probabilistic and RIPless theory of compressed sensing
This paper introduces a simple and very general theory of compressive
sensing. In this theory, the sensing mechanism simply selects sensing vectors
independently at random from a probability distribution F; it includes all
models - e.g. Gaussian, frequency measurements - discussed in the literature,
but also provides a framework for new measurement strategies as well. We prove
that if the probability distribution F obeys a simple incoherence property and
an isotropy property, one can faithfully recover approximately sparse signals
from a minimal number of noisy measurements. The novelty is that our recovery
results do not require the restricted isometry property (RIP) - they make use
of a much weaker notion - or a random model for the signal. As an example, the
paper shows that a signal with s nonzero entries can be faithfully recovered
from about s log n Fourier coefficients that are contaminated with noise.
Comment: 36 pages
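To illustrate the sampling model, the sketch below draws sensing vectors i.i.d. from a fixed distribution F and recovers an s-sparse signal from on the order of s log n noisy measurements. For simplicity, F is taken here to be random +/-1 sign vectors (an isotropic and incoherent distribution), and the noise-aware l1 program is a standard stand-in rather than the paper's exact estimator; numpy and cvxpy are assumed.

```python
import numpy as np
import cvxpy as cp

n, s = 512, 8
m = int(np.ceil(4 * s * np.log(n)))   # on the order of s log n measurements
rng = np.random.default_rng(4)

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Sensing vectors drawn i.i.d. from F: here, random +/-1 signs.
A = rng.choice([-1.0, 1.0], size=(m, n))
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(m)

# Noise-aware l1 recovery: min ||x||_1  s.t.  ||A x - y||_2 <= eps.
x = cp.Variable(n)
eps = sigma * np.sqrt(2 * m)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - y, 2) <= eps])
prob.solve()

print("relative l2 error:",
      float(np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)))
```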